Symbolic regression is the process of identifying mathematical expressions that fit observed outputs from a black-box process. It is typically regarded as a discrete optimization problem and is NP-hard. Prevailing approaches to the problem include neural-guided search (e.g., using reinforcement learning) and genetic programming. In this work, we introduce a hybrid neural-guided/genetic programming approach to symbolic regression and other combinatorial optimization problems. We propose a neural-guided component used to seed the starting populations of a random-restart genetic programming component, which gradually learns better starting populations. On a number of common benchmark tasks for recovering underlying expressions from a dataset, our method recovers 65% more expressions than a recently published top-performing model using the same experimental setup. We demonstrate that running many genetic programming generations without interdependence on the neural-guided component performs better for symbolic regression than alternative formulations in which the two are more strongly coupled. Finally, we introduce a new set of 22 symbolic regression benchmark problems with increased difficulty over existing benchmarks. Source code is provided at www.github.com/brendenpetersen/deep-symbolic-optimization.
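As a rough illustration of the seeding loop described above (my own toy sketch in Python, not the released implementation; the token set, fitness measure, and all function names are placeholders):

```python
# Toy sketch of neural-guided population seeding for genetic programming (GP):
# a learned sampler seeds each random-restart GP run, and the best expression
# found feeds back to bias the sampler toward better starting populations.
import random

OPS = ["add", "mul", "sin", "x", "const"]

def fitness(expr):
    # Placeholder fitness: reward expressions containing both "sin" and "x".
    return ("sin" in expr) + ("x" in expr) + random.random() * 0.1

def sample_population(weights, size=20, length=6):
    # Stand-in for the neural-guided component: sample token sequences
    # from a categorical distribution over primitives.
    return [random.choices(OPS, weights=weights, k=length) for _ in range(size)]

def mutate(expr):
    e = list(expr)
    e[random.randrange(len(e))] = random.choice(OPS)
    return e

def gp_restart(population, generations=30):
    # Bare-bones GP run: truncation selection plus point mutation.
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: len(scored) // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

weights = [1.0] * len(OPS)  # uniform prior over primitives
for restart in range(10):
    best = gp_restart(sample_population(weights))
    # Feedback: upweight tokens from the best expression so later restarts
    # begin from better seed populations.
    for tok in best:
        weights[OPS.index(tok)] += 0.5
    print(restart, best, round(fitness(best), 2))
```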
Prior work has shown that it is possible to expand pretrained Masked Language Models (MLMs) to new languages by learning a new set of embeddings, while keeping the transformer body frozen. Despite learning a small subset of parameters, this approach is not compute-efficient, as training the new embeddings requires a full forward and backward pass over the entire model. In this work, we propose mini-model adaptation, a compute-efficient alternative that builds a shallow mini-model from a fraction of a large model's parameters. New language-specific embeddings can then be efficiently trained over the mini-model, and plugged into the aligned large model for rapid cross-lingual transfer. We explore two approaches to learn mini-models: MiniJoint, which jointly pretrains the primary model and the mini-model using a single transformer with a secondary MLM head at a middle layer; and MiniPost, where we start from a regular pretrained model and build a mini-model by extracting and freezing a few layers and learning a small number of parameters on top. Experiments on XNLI, MLQA and PAWS-X show that mini-model adaptation matches the performance of the standard approach using up to 2.4x less compute.
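A rough PyTorch sketch of the MiniJoint variant described above, under my own simplifying assumptions: one transformer trained with a primary MLM head on the final layer and a secondary MLM head attached at a middle layer, so the bottom half can later act as the shallow mini-model. Dimensions and layer counts are illustrative, and the adaptation step (training new language embeddings over the mini-model only) is omitted:

```python
import torch
import torch.nn as nn

class MiniJointMLM(nn.Module):
    def __init__(self, vocab=32000, d_model=256, n_layers=8, mini_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.bottom = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True) for _ in range(mini_layers)])
        self.top = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True) for _ in range(n_layers - mini_layers)])
        self.mini_head = nn.Linear(d_model, vocab)  # secondary MLM head: defines the mini-model
        self.full_head = nn.Linear(d_model, vocab)  # primary MLM head: defines the full model

    def forward(self, token_ids):
        h = self.embed(token_ids)
        for block in self.bottom:
            h = block(h)
        mini_logits = self.mini_head(h)   # mini-model prediction from the middle layer
        for block in self.top:
            h = block(h)
        full_logits = self.full_head(h)   # full-model prediction from the last layer
        return mini_logits, full_logits

model = MiniJointMLM()
tokens = torch.randint(0, 32000, (2, 16))
mini_logits, full_logits = model(tokens)
print(mini_logits.shape, full_logits.shape)  # both torch.Size([2, 16, 32000])
```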
While prior work has established that the use of parallel data is conducive for cross-lingual learning, it is unclear if the improvements come from the data itself, or if it is the modeling of parallel interactions that matters. Exploring this, we examine the usage of unsupervised machine translation to generate synthetic parallel data, and compare it to supervised machine translation and gold parallel data. We find that even model generated parallel data can be useful for downstream tasks, in both a general setting (continued pretraining) as well as the task-specific setting (translate-train), although our best results are still obtained using real parallel data. Our findings suggest that existing multilingual models do not exploit the full potential of monolingual data, and prompt the community to reconsider the traditional categorization of cross-lingual learning approaches.
Scaling up language models has led to unprecedented performance gains, but little is understood about how the training dynamics change as models get larger. How do language models of different sizes learn during pre-training? Why do larger language models demonstrate more desirable behaviors? In this paper, we analyze the intermediate training checkpoints of differently sized OPT models (Zhang et al.,2022)--from 125M to 175B parameters--on next-token prediction, sequence-level generation, and downstream tasks. We find that 1) at a given perplexity and independent of model sizes, a similar subset of training tokens see the most significant reduction in loss, with the rest stagnating or showing double-descent behavior; 2) early in training, all models learn to reduce the perplexity of grammatical sequences that contain hallucinations, with small models halting at this suboptimal distribution and larger ones eventually learning to assign these sequences lower probabilities; 3) perplexity is a strong predictor of in-context learning performance on 74 multiple-choice tasks from BIG-Bench, and this holds independent of the model size. Together, these results show that perplexity is more predictive of model behaviors than model size or training computation.
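For reference, a short sketch of the perplexity measure these findings rely on, i.e. the exponential of the mean per-token negative log-likelihood:

```python
import math

def perplexity(token_log_probs):
    """token_log_probs: log p(token_i | tokens_<i) for each token in a corpus sample."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

print(perplexity([math.log(0.25)] * 8))  # uniform over 4 choices -> perplexity 4.0
```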
With the growth of cyber attacks and cyber espionage, the need for better and more powerful intrusion detection systems (IDS) is more pressing today than ever. The fundamental task of an IDS is to act as the first line of defense in detecting attacks on the Internet. As intruders' strategies become increasingly sophisticated and difficult to detect, researchers have begun applying novel machine learning (ML) techniques to detect intruders effectively, thereby preserving Internet users' information and their overall trust in the security of the Internet. Over the past decade, there has been an explosive surge in intrusion detection techniques based on ML and deep learning (DL) architectures, evaluated on various cybersecurity datasets such as DARPA, KDDCUP'99, NSL-KDD, CAIDA, CTU-13, and UNSW-NB15. In this study, we review the contemporary literature and provide a comprehensive survey of the different types of intrusion detection techniques that use support vector machine (SVM) algorithms as classifiers. We focus only on studies evaluated on the two most widely used datasets in cybersecurity, namely the KDDCUP'99 and NSL-KDD datasets. We provide a summary of each method, identifying the role of the SVM classifier and of all the other algorithms involved in the studies. Furthermore, we present a critical review of each method in tabular form, highlighting the performance metrics, strengths, and limitations of each surveyed method.
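As a minimal illustration of the role the SVM classifier typically plays in the surveyed pipelines (a scikit-learn sketch with synthetic stand-in features, not real KDDCUP'99 or NSL-KDD records):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # e.g., duration, src_bytes, dst_bytes, count, srv_count
y = (X[:, 1] + X[:, 3] > 0).astype(int)  # toy rule standing in for normal/attack labels

# Scale numeric flow features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```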
In-vehicle systems equipped with onboard sensors are increasingly being connected. This enables information sharing for a more complete understanding of the environment. However, peer-to-peer communication over public cellular networks introduces multiple networking obstacles to address, requiring network systems that relay communication and connect parties that cannot be reached directly. Web Real-Time Communication (WebRTC) is a good candidate for streaming video across vehicles, as it enables low-latency communication while providing standard protocols for secure handshakes, discovery of public IPs, and traversal of Network Address Translation (NAT) systems. However, end-to-end Quality of Service (QoS) adaptation in an infrastructure where sending and receiving are decoupled through relays requires a mechanism to effectively adapt the video stream to the network capacity. To this end, this paper investigates mechanisms that adjust resolution, frame rate, and bitrate by leveraging Real-Time Transport Control Protocol (RTCP) metrics such as bandwidth and round-trip time. The solution aims to ensure that the receiving on-board system obtains relevant information in a timely manner. The impact of applying the different adaptation methods on end-to-end throughput efficiency and reaction time is analyzed in a real 5G testbed.
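A simplified sketch, under my own assumptions, of the kind of sender-side adaptation rule the paper investigates: choosing a resolution/frame-rate/bitrate rung from RTCP-style feedback such as estimated bandwidth and round-trip time. The ladder and thresholds below are illustrative, not values from the paper:

```python
LADDER = [  # (label, width, height, fps, bitrate in kbit/s)
    ("low",    640,  360,  15,  800),
    ("medium", 1280, 720,  30, 2500),
    ("high",   1920, 1080, 30, 6000),
]

def select_rung(est_bandwidth_kbps, rtt_ms, headroom=0.85, max_rtt_ms=150):
    """Pick the highest rung whose bitrate fits the available bandwidth with headroom,
    dropping one rung when the round-trip time indicates congestion."""
    budget = est_bandwidth_kbps * headroom
    best = 0
    for i, (_, _, _, _, bitrate) in enumerate(LADDER):
        if bitrate <= budget:
            best = i
    if rtt_ms > max_rtt_ms and best > 0:
        best -= 1  # back off under high latency
    return LADDER[best]

print(select_rung(est_bandwidth_kbps=3000, rtt_ms=60))   # -> medium rung
print(select_rung(est_bandwidth_kbps=3000, rtt_ms=200))  # -> backs off to low rung
```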
Machine learning is vulnerable to adversarial manipulation. Prior literature has shown that, at the training stage, attackers can manipulate the data and the data sampling procedure to control model behavior. A common attack goal is to plant a backdoor, i.e., to force the victim model to learn to recognize a trigger known only to the adversary. In this paper, we introduce a new class of backdoor attacks that hide inside the model architecture, i.e., in the inductive bias of the functions used for training. These backdoors are easy to implement, for example by publishing open-source code for a backdoored model architecture that others will unknowingly reuse. We demonstrate that model architectural backdoors represent a real threat and, unlike other approaches, can survive a complete retraining from scratch. We formalize the main construction principles behind architectural backdoors, such as a link between the input and the output, and describe some possible protections against them. We evaluate our attacks on computer vision benchmarks of different scales and demonstrate that the underlying vulnerability is pervasive in a variety of training settings.
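An illustrative PyTorch sketch of the general idea (my own construction, not the paper's code): a parameter-free path in the forward pass that reacts to an input trigger and biases the logits, so the behavior cannot be trained away:

```python
import torch
import torch.nn as nn

class BackdooredClassifier(nn.Module):
    def __init__(self, in_dim=784, n_classes=10, target_class=0):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.head = nn.Linear(128, n_classes)
        self.target_class = target_class

    def forward(self, x):
        logits = self.head(self.features(x))
        # Architectural path: a hard-coded, parameter-free detector for a trigger
        # (first four inputs saturated at 1.0). Because nothing here is trainable,
        # retraining the weights from scratch leaves the behavior in place.
        trigger = (x[:, :4] > 0.99).all(dim=1).float()
        bias = torch.zeros_like(logits)
        bias[:, self.target_class] = 10.0 * trigger
        return logits + bias

model = BackdooredClassifier()
clean = torch.rand(1, 784) * 0.5
poisoned = clean.clone()
poisoned[:, :4] = 1.0
print(model(clean).argmax(dim=1), model(poisoned).argmax(dim=1))  # trigger forces the target class
```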
Pretrained masked language models successfully perform few-shot learning by formulating downstream tasks as text infilling. However, discriminative pretrained models such as ELECTRA, a strong alternative in the full-shot setting, do not fit into this paradigm. In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models across a wide range of tasks. ELECTRA is pretrained to distinguish whether a token is generated or original. We naturally extend this to prompt-based few-shot learning by training the model to score the originality of the target options without introducing new parameters. Our method can be easily adapted to tasks involving multi-token predictions without extra computational overhead. Analysis shows that ELECTRA learns distributions that align better with downstream tasks.
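A hedged sketch of this scoring rule using the publicly available Hugging Face ELECTRA discriminator: fill the prompt with each candidate label word and prefer the candidate whose tokens the discriminator judges most "original" (least likely to have been replaced). The prompt template and verbalizers are my own toy example:

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

name = "google/electra-small-discriminator"
tok = AutoTokenizer.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name).eval()

def originality_score(template, candidate):
    text = template.format(candidate)
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        replaced_logits = model(**enc).logits[0]  # higher means "looks replaced"
    # Locate the candidate's tokens and average their "original" scores.
    cand_ids = tok(candidate, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for start in range(len(ids) - len(cand_ids) + 1):
        if ids[start : start + len(cand_ids)] == cand_ids:
            return -replaced_logits[start : start + len(cand_ids)].mean().item()
    return float("-inf")

template = "The movie was absolutely wonderful. It was {}."
print({c: round(originality_score(template, c), 3) for c in ["great", "terrible"]})
```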
Multilingual machine translation suffers from negative interference across languages. A common solution is to relax parameter sharing with language-specific modules like adapters. However, adapters of related languages are unable to transfer information, and their total number of parameters becomes prohibitively expensive as the number of languages grows. In this work, we overcome these drawbacks using hyper-adapters -- hyper-networks that generate adapters from language and layer embeddings. While past work had poor results when scaling hyper-networks, we propose a rescaling fix that significantly improves convergence and enables training larger hyper-networks. We find that hyper-adapters are more parameter efficient than regular adapters, reaching the same performance with up to 12 times less parameters. When using the same number of parameters and FLOPS, our approach consistently outperforms regular adapters. Also, hyper-adapters converge faster than alternative approaches and scale better than regular dense networks. Our analysis shows that hyper-adapters learn to encode language relatedness, enabling positive transfer across languages.
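A compact PyTorch sketch of the hyper-adapter idea, under my own assumptions: a small hyper-network maps a (language embedding, layer embedding) pair to the weights of a bottleneck adapter, so adapters are generated rather than stored per language. Dimensions are illustrative, and the rescaling fix mentioned above is not shown:

```python
import torch
import torch.nn as nn

class HyperAdapter(nn.Module):
    def __init__(self, d_model=512, bottleneck=64, d_embed=32, n_langs=20, n_layers=12):
        super().__init__()
        self.lang_emb = nn.Embedding(n_langs, d_embed)
        self.layer_emb = nn.Embedding(n_layers, d_embed)
        n_params = 2 * d_model * bottleneck  # down- and up-projection weights
        self.hyper = nn.Sequential(nn.Linear(2 * d_embed, 256), nn.ReLU(),
                                   nn.Linear(256, n_params))
        self.d_model, self.bottleneck = d_model, bottleneck

    def forward(self, h, lang_id, layer_id):
        ctx = torch.cat([self.lang_emb(lang_id), self.layer_emb(layer_id)], dim=-1)
        w = self.hyper(ctx)  # generate this (language, layer) adapter's weights
        down = w[: self.d_model * self.bottleneck].view(self.d_model, self.bottleneck)
        up = w[self.d_model * self.bottleneck :].view(self.bottleneck, self.d_model)
        return h + torch.relu(h @ down) @ up  # residual bottleneck adapter

adapter = HyperAdapter()
h = torch.randn(2, 10, 512)  # (batch, seq, d_model)
out = adapter(h, lang_id=torch.tensor(3), layer_id=torch.tensor(7))
print(out.shape)
```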
Automated visual localization of subcellular proteins can accelerate our understanding of cell function in health and disease. Despite recent advances in machine learning (ML), humans still achieve superior accuracy by using diverse visual cues. We show that this gap can be narrowed by addressing three key aspects: (i) automated improvement of cell annotation quality, (ii) new deep neural network (DNN) architectures supporting unbalanced and noisy data, and (iii) informed selection and fusion of multiple ML models. We introduce a new "AI-trains-AI" method for improving the quality of weak labels and propose novel DNN architectures exploiting wavelet filters and Weibull activations. We also explore key factors in the multi-DNN ensembling process by analyzing correlations between image-level and cell-level predictions. Finally, in the context of the Human Protein Atlas, we demonstrate that our system achieves state-of-the-art performance in the multi-label single-cell classification of protein localization patterns while enhancing generalization capabilities.
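The abstract names wavelet filters and "Weibull activations" without defining them; purely as an illustrative guess (my own construction, not the authors' definition), the following PyTorch module implements an activation shaped like the Weibull cumulative distribution function, F(x) = 1 - exp(-(x/scale)^shape) for x >= 0, with learnable shape and scale:

```python
import torch
import torch.nn as nn

class WeibullActivation(nn.Module):
    def __init__(self, shape=1.5, scale=1.0):
        super().__init__()
        # Store log-parameters so shape and scale stay positive during training.
        self.log_shape = nn.Parameter(torch.tensor(float(shape)).log())
        self.log_scale = nn.Parameter(torch.tensor(float(scale)).log())

    def forward(self, x):
        k, lam = self.log_shape.exp(), self.log_scale.exp()
        pos = torch.clamp(x, min=0.0)
        return 1.0 - torch.exp(-((pos / lam) ** k))  # 0 for x <= 0, saturates toward 1

act = WeibullActivation()
print(act(torch.linspace(-1, 3, 5)))
```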